Convergence detection for optimization algorithms: Approximate-KKT stopping criterion when Lagrange multipliers are not available
Authors
Abstract
In this paper we investigate how to efficiently apply Approximate Karush-Kuhn-Tucker (AKKT) proximity measures as stopping criteria for optimization algorithms that do not generate approximations to Lagrange multipliers, in particular, Genetic Algorithms. We prove that for a wide range of constrained optimization problems the KKT error measure tends to zero. We also develop a simple model to compute the KKT error measure that requires only the solution of a non-negative linear least squares problem. Our numerical experiments show the efficiency of the strategy.
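The abstract does not spell out the least-squares model, but the idea of measuring KKT proximity without multiplier estimates can be sketched as follows: at a candidate point, find non-negative multipliers that best cancel the objective gradient against the constraint gradients, and take the residual norm as the error. The sketch below, with a made-up illustrative problem (not taken from the paper), uses `scipy.optimize.nnls` for the non-negative least squares step; it assumes only inequality constraints.

```python
import numpy as np
from scipy.optimize import nnls

def akkt_error(grad_f, grad_g):
    """Approximate-KKT error at a candidate point.

    grad_f : gradient of the objective at the point, shape (n,).
    grad_g : gradients of the (near-)active inequality constraints,
             stacked as columns, shape (n, m).

    Solves  min_{lam >= 0} || grad_f + grad_g @ lam ||_2
    by non-negative linear least squares; the optimal residual is the
    KKT error measure, and lam is the implied multiplier estimate.
    """
    lam, residual = nnls(grad_g, -grad_f)
    return residual, lam

# Illustrative problem (an assumption for this sketch, not from the paper):
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0.
# The minimizer is x* = (0.5, 0.5) with multiplier lambda* = 1.
x = np.array([0.5, 0.5])
grad_f = 2.0 * x                        # gradient of the objective at x
grad_g = np.array([[-1.0], [-1.0]])     # gradient of g as a single column

err, lam = akkt_error(grad_f, grad_g)
print(err)      # near 0: x satisfies the KKT conditions
print(lam[0])   # near 1: the recovered multiplier
```

A stopping test for a multiplier-free method such as a Genetic Algorithm would then simply check `err` against a tolerance at the best point of each generation.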
Related works
On the behaviour of constrained optimization methods when Lagrange multipliers do not exist
Sequential optimality conditions are related to stopping criteria for nonlinear programming algorithms. Local minimizers of continuous optimization problems satisfy these conditions without constraint qualifications. It is interesting to discover whether well-known optimization algorithms generate primal-dual sequences that allow one to detect that a sequential optimality condition holds. When ...
Ergodic Convergence in Subgradient Optimization
When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain ...
On the duality of quadratic minimization problems using pseudo inverses
In this paper we consider the minimization of a positive semidefinite quadratic form with a singular corresponding matrix $H$. We state the dual formulation of the original problem and treat both problems only using the vectors $x \in \mathcal{N}(H)^\perp$ instead of the classical approach of convex optimization techniques such as the null space method. Given this approach and based on t...
A Practical General Approximation Criterion for Methods of Multipliers Based on Bregman Distances
This paper demonstrates that for generalized methods of multipliers for convex programming based on Bregman distance kernels, including the classical quadratic method of multipliers, the minimization of the augmented Lagrangian can be truncated using a simple, generally implementable stopping criterion based only on the norms of the primal iterate and the gradient (or a subgradient) of the au...
A second-order sequential optimality condition associated to the convergence of optimization algorithms
Sequential optimality conditions have recently played an important role on the analysis of the global convergence of optimization algorithms towards first-order stationary points, justifying their stopping criteria. In this paper we introduce a sequential optimality condition that takes into account second-order information and that allows us to improve the global convergence assumptions of sev...
Journal: Oper. Res. Lett.
Volume: 43, Issue: -
Pages: -
Published: 2015